1.
Sci Adv ; 10(10): eadi2525, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38446888

ABSTRACT

Why do humans spontaneously dance to music? To test the hypothesis that motor dynamics reflect predictive timing during music listening, we created melodies with varying degrees of rhythmic predictability (syncopation) and asked participants to rate their wanting-to-move (groove) experience. The degree of syncopation and the groove ratings were quadratically related. Magnetoencephalography data showed that, while auditory regions track the rhythm of melodies, beat-related 2-hertz activity and neural dynamics at delta (1.4 hertz) and beta (20 to 30 hertz) rates in the dorsal auditory pathway code for the experience of groove. Critically, the left sensorimotor cortex coordinates these groove-related delta and beta activities. These findings align with the predictions of a neurodynamic model, suggesting that oscillatory motor engagement during music listening reflects predictive timing and arises from the interaction of neural dynamics along the dorsal auditory pathway.


Subjects
Music, Humans, Cell Membrane, Cerebral Cortex, Magnetoencephalography
2.
J Acoust Soc Am ; 154(6): 3799-3809, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38109404

ABSTRACT

Computational models are used to predict the performance of human listeners for carefully specified signal and noise conditions. However, there may be substantial discrepancies between the conditions under which listeners are tested and those used for model predictions. Thus, models may predict better performance than exhibited by the listeners, or they may "fail" to capture the ability of the listener to respond to subtle stimulus conditions. This study tested a computational model devised to predict a listener's ability to detect an aircraft in various soundscapes. The model and listeners processed the same sound recordings under carefully specified testing conditions. Details of signal and masker calibration were carefully matched, and the model was tested using the same adaptive tracking paradigm. Perhaps most importantly, the behavioral results were not available to the modeler before the model predictions were presented. Recordings from three different aircraft were used as the target signals. Maskers were derived from recordings obtained at nine locations ranging from very quiet rural environments to suburban and urban settings. Overall, with a few exceptions, model predictions matched the performance of the listeners very well. Discussion focuses on those differences and possible reasons for their occurrence.
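
The abstract notes that the model was run through the same adaptive tracking paradigm as the listeners. The specific tracking rule is not given; the sketch below assumes a standard two-down/one-up staircase (which converges near 70.7% correct) and a made-up listener whose detection threshold sits near 48 dB, simply to illustrate how such a track estimates a detection threshold.

```python
import numpy as np

def two_down_one_up(respond, start_level=60.0, step=4.0, n_reversals=8):
    """Generic 2-down/1-up adaptive track (converges near 70.7% correct).

    `respond(level)` returns True when the listener (or model) detects the
    target at `level` (dB). The threshold estimate is the mean level at the
    last reversals.
    """
    level, correct_in_a_row, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_a_row += 1
            if correct_in_a_row == 2:            # two correct -> make it harder
                correct_in_a_row = 0
                if direction == +1:
                    reversals.append(level)      # direction change = reversal
                direction = -1
                level -= step
        else:                                    # one miss -> make it easier
            correct_in_a_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return np.mean(reversals[n_reversals // 2:])

# Hypothetical "listener": detects the aircraft when its level beats ~48 dB.
rng = np.random.default_rng(1)
listener = lambda level: level + rng.normal(0, 2.0) > 48.0
print(round(two_down_one_up(listener), 1))   # estimate near the toy 48 dB threshold
```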


Subjects
Perceptual Masking, Speech Perception, Humans, Auditory Threshold, Noise, Aircraft, Computer Simulation
3.
Front Comput Neurosci ; 17: 1151895, 2023.
Article in English | MEDLINE | ID: mdl-37265781

ABSTRACT

Rhythmicity permeates large parts of human experience. Humans generate various motor and brain rhythms spanning a range of frequencies. We also experience and synchronize to externally imposed rhythmicity, for example from music and song or from the 24-h light-dark cycles of the sun. In the context of music, humans have the ability to perceive, generate, and anticipate rhythmic structures, for example, "the beat." Experimental and behavioral studies offer clues about the biophysical and neural mechanisms that underlie our rhythmic abilities, and about the brain areas involved, but many open questions remain. In this paper, we review several theoretical and computational approaches, each centered on a different level of description, that address specific aspects of musical rhythm generation, perception, attention, perception-action coordination, and learning. We survey methods and results from applications of dynamical systems theory, neuro-mechanistic modeling, and Bayesian inference. Some frameworks rely on synchronization of intrinsic brain rhythms that span the relevant frequency range; some formulations involve real-time adaptation schemes for error correction to align the phase and frequency of a dedicated circuit; others involve learning and dynamically adjusting expectations to make rhythm-tracking predictions. Each of the approaches, while initially designed to answer specific questions, offers the possibility of being integrated into a larger framework that provides insights into our ability to perceive and generate rhythmic patterns.
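
As one concrete illustration of the error-correction class of models mentioned above, here is a minimal linear phase- and period-correction tapper. The form and the parameter values (alpha, beta) are generic textbook choices, not any specific model from the review.

```python
import numpy as np

def tap_with_correction(onsets, alpha=0.5, beta=0.1, period0=0.6):
    """Linear phase- and period-correction tapper.
    asyn_n       = tap_n - onset_n
    tap_{n+1}    = tap_n + period_n - alpha * asyn_n
    period_{n+1} = period_n         - beta  * asyn_n
    """
    period = period0
    taps = [onsets[0]]              # assume the first tap lands on the first onset
    for onset in onsets[:-1]:       # schedule the tap that follows each onset
        asyn = taps[-1] - onset
        taps.append(taps[-1] + period - alpha * asyn)
        period = period - beta * asyn
    return np.array(taps)

# Metronome that abruptly slows from a 500 ms to a 600 ms beat period
onsets = np.concatenate([np.arange(10) * 0.5, 4.5 + np.arange(1, 11) * 0.6])
taps = tap_with_correction(onsets, period0=0.5)
print(np.round(taps - onsets, 3))   # asynchronies grow, then shrink back toward zero
```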

4.
PLoS Comput Biol ; 19(6): e1011154, 2023 06.
Article in English | MEDLINE | ID: mdl-37285380

ABSTRACT

A musician's spontaneous rate of movement, called spontaneous motor tempo (SMT), can be measured while the musician spontaneously plays a simple melody. Data show that the SMT influences the musician's tempo and synchronization. In this study we present a model that captures these phenomena. We review the results from three previously published studies: solo musical performance with a pacing metronome tempo that is different from the SMT, solo musical performance without a metronome at a tempo that is faster or slower than the SMT, and duet musical performance between musicians with matching or mismatching SMTs. These studies showed, respectively, that the asynchrony between the pacing metronome and the musician's tempo grew as a function of the difference between the metronome tempo and the musician's SMT, that musicians drifted away from the initial tempo toward the SMT, and that absolute asynchronies were smaller when musicians had matching SMTs. We hypothesize that the SMT constantly acts as a pulling force on musical actions performed at a tempo different from a musician's SMT. To test our hypothesis, we developed a model consisting of a non-linear oscillator with Hebbian tempo learning and a pulling force toward the model's spontaneous frequency. While the model's spontaneous frequency emulates the SMT, elastic Hebbian learning allows the oscillator's frequency to match a stimulus frequency. We first fit model parameters to match the data in the first of the three studies and then asked whether the same model would explain the data in the remaining two studies without further tuning. Results showed that the model's dynamics allowed it to explain all three experiments with the same set of parameters. Our theory offers a dynamical-systems explanation of how an individual's SMT affects synchronization in realistic music performance settings, and the model also enables predictions about performance settings not yet tested.
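
A minimal sketch in the spirit of the model described: a Hopf-type oscillator whose frequency is simultaneously drawn toward a periodic stimulus (Hebbian-style frequency learning) and pulled elastically back toward its spontaneous frequency f0 (the SMT). The equations and parameter values below are illustrative stand-ins, not the published model.

```python
import numpy as np

def smt_oscillator(stim, fs, f0=2.0, alpha=1.0, beta=-1.0,
                   forcing=0.2, learn=0.5, pull=0.05):
    """Adaptive-frequency oscillator with an elastic pull toward f0 (the SMT).
    Illustrative equations and parameters only."""
    dt = 1.0 / fs
    z, f = 0.1 + 0.0j, f0
    freqs = np.empty(len(stim))
    for n, x in enumerate(stim):
        dz = z * (alpha + 1j * 2 * np.pi * f + beta * abs(z) ** 2) + forcing * x
        # frequency learning (toward the stimulus) plus elastic pull (toward f0)
        df = -learn * forcing * x * np.sin(np.angle(z)) - pull * (f - f0)
        z, f = z + dt * dz, f + dt * df
        freqs[n] = f
    return freqs

fs, dur = 1000, 60.0
t = np.arange(0, dur, 1.0 / fs)
metronome = np.cos(2 * np.pi * 2.4 * t)      # pacing stimulus at 2.4 Hz
f_track = smt_oscillator(metronome, fs, f0=2.0)
# The adapted rate is drawn toward the 2.4 Hz stimulus but held back toward
# the 2 Hz spontaneous tempo; the balance depends on learn vs. pull.
print(round(f_track[0], 3), round(f_track[-1], 3))
```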


Subjects
Music, Elasticity, Learning, Movement, Humans
5.
Brain Sci ; 12(12), 2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36552136

ABSTRACT

Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or a limited set of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while younger and older adults listened to self-selected musical recordings. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs did not differ significantly between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
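
A sketch of one way to compute the pulse-level phase-locking value described here: band-pass the EEG around the recording's detected pulse frequency, extract instantaneous phase, and compare it with an idealized pulse phase built from the beat times. Function names, parameters, and the toy signals are illustrative; the study's actual preprocessing and normalization pipeline is not reproduced.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def plv_at_pulse(eeg, beat_times, fs, pulse_hz, bw=1.0):
    """Phase-locking value between one EEG channel and the musical pulse."""
    # Narrow band around the detected pulse frequency
    sos = butter(2, [pulse_hz - bw / 2, pulse_hz + bw / 2],
                 btype="band", fs=fs, output="sos")
    eeg_phase = np.angle(hilbert(sosfiltfilt(sos, eeg)))
    # Idealized pulse phase: advances by 2*pi between consecutive beats
    t = np.arange(len(eeg)) / fs
    pulse_phase = 2 * np.pi * np.interp(t, beat_times, np.arange(len(beat_times)))
    return np.abs(np.mean(np.exp(1j * (eeg_phase - pulse_phase))))

# Toy data: a noisy 2 Hz "EEG" component locked to beats every 0.5 s
fs, dur, pulse_hz = 250, 30, 2.0
t = np.arange(0, dur, 1 / fs)
eeg = np.cos(2 * np.pi * pulse_hz * t + 0.3) + np.random.randn(len(t))
beats = np.arange(0, dur, 1 / pulse_hz)
print(round(plv_at_pulse(eeg, beats, fs, pulse_hz), 3))   # high PLV, near 1
```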

6.
Elife ; 11, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36317963

ABSTRACT

Humans are social animals who engage in a variety of collective activities requiring coordinated action. Among these, music is a defining and ancient aspect of human sociality. Human social interaction has largely been addressed in dyadic paradigms, and it is yet to be determined whether the ensuing conclusions generalize to larger groups. Studied more extensively in non-human animal behavior, the presence of multiple agents engaged in the same task space creates different constraints and possibilities than in simpler dyadic interactions. We addressed whether collective dynamics play a role in human circle drumming. The task was to synchronize in a group with an initial reference pattern and then maintain synchronization after it was muted. We varied the number of drummers from solo to dyad, quartet, and octet. The observed lower variability, lack of speeding up, smoother individual dynamics, and leader-less inter-personal coordination indicated that stability increased as group size increased, a sort of temporal wisdom of crowds. We propose a hybrid continuous-discrete Kuramoto model for emergent group synchronization with a pulse-based coupling that exhibits a mean field positive feedback loop. This research suggests that collective phenomena are among the factors that play a role in social cognition.
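
The published model is a hybrid continuous-discrete, pulse-coupled Kuramoto variant; the sketch below uses only the standard continuous mean-field Kuramoto model with noise, to illustrate the group-averaging intuition that the mean field steadies as group size grows. Frequencies, coupling, and noise levels are made up.

```python
import numpy as np

def kuramoto(n, k=1.5, steps=4000, dt=0.01, freq_sd=0.1, noise_sd=0.3, seed=0):
    """Standard noisy mean-field Kuramoto model (continuous version)."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi * (2.0 + freq_sd * rng.standard_normal(n))   # ~2 Hz drummers
    theta = rng.uniform(0, 2 * np.pi, n)
    order = np.empty(steps)
    for i in range(steps):
        mean_field = np.mean(np.exp(1j * theta))                   # r * exp(i*psi)
        pull = k * np.abs(mean_field) * np.sin(np.angle(mean_field) - theta)
        theta = theta + dt * (omega + pull) \
                + np.sqrt(dt) * noise_sd * rng.standard_normal(n)
        order[i] = np.abs(np.mean(np.exp(1j * theta)))
    return order[steps // 2:]                                      # drop the transient

# Larger groups tend to hold a steadier mean field (smaller fluctuations of r),
# one reading of the "temporal wisdom of crowds" effect described above.
for n in (2, 4, 8):
    r = kuramoto(n)
    print(n, round(r.mean(), 3), round(r.std(), 3))
```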


Subjects
Music, Animals, Social Behavior, Interpersonal Relations, Animal Behavior, Self-Help Groups
7.
Exp Brain Res ; 240(6): 1775-1790, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35507069

ABSTRACT

A consistent relationship has been found between rhythmic processing and reading skills. Impaired ability to entrain movements to an auditory rhythm has been found in both behavioral and neural studies of clinical populations with language-related deficits, such as children with developmental dyslexia. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap-to-syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap-to-syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.


Subjects
Dyslexia, Reading, Adult, Child, Humans, Language, Movement
8.
Front Psychol ; 13: 653696, 2022.
Article in English | MEDLINE | ID: mdl-35282203

ABSTRACT

Musical rhythm abilities (the perception of, and coordinated action to, the rhythmic structure of music) undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory (NRT), explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm perception and action. Next, we offer an exposition of Neural Resonance Theory and argue that elements of the theory are consistent with dynamical, radically embodied (i.e., non-representational), and ecological approaches to cognition and development. We then discuss how dynamical models, implemented as self-organizing networks of neural oscillations with Hebbian plasticity, predict key features of music development. We conclude by illustrating how the notions of dynamical embodiment, resonance, and attunement provide a conceptual language for characterizing musical rhythm development and, when formalized in physiologically informed dynamical models, provide a theoretical framework for generating testable empirical predictions about musical rhythm development, such as the kinds of native and non-native rhythmic structures infants and children can learn, steady-state evoked potentials to native and non-native musical rhythms, and the effects of short-term (e.g., infant bouncing, infant music classes), long-term (e.g., perceptual narrowing to musical rhythm), and very-long-term (e.g., music enculturation, musical training) learning on music perception-action.

9.
Dev Sci ; 24(5): e13103, 2021 09.
Article in English | MEDLINE | ID: mdl-33570778

ABSTRACT

Previous work suggests that auditory-vestibular interactions, which emerge during bodily movement to music, can influence the perception of musical rhythm. In a seminal study on the ontogeny of musical rhythm, Phillips-Silver and Trainor (2005) found that bouncing infants to an unaccented rhythm influenced infants' perceptual preferences for accented rhythms that matched the rate of bouncing. In the current study, we ask whether nascent, diffuse coupling between auditory and motor systems is sufficient to bootstrap short-term Hebbian plasticity in the auditory system and explain infants' preferences for accented rhythms thought to arise from auditory-vestibular interactions. First, we specify a nonlinear, dynamical system in which two oscillatory neural networks, representing developmentally nascent auditory and motor systems, interact through weak, non-specific coupling. The auditory network was equipped with short-term Hebbian plasticity, allowing the auditory network to tune its intrinsic resonant properties. Next, we simulate the effect of vestibular input (e.g., infant bouncing) on infants' perceptual preferences for accented rhythms. We found that simultaneous auditory-vestibular training shaped the model's response to musical rhythm, enhancing vestibular-related frequencies in auditory-network activity. Moreover, simultaneous auditory-vestibular training, relative to auditory- or vestibular-only training, facilitated short-term auditory plasticity in the model, producing stronger oscillator connections in the auditory network. Finally, when tested on a musical rhythm, models which received simultaneous auditory-vestibular training, but not models that received auditory- or vestibular-only training, resonated strongly at frequencies related to their "bouncing," a finding qualitatively similar to infants' preferences for accented rhythms that matched the rate of infant bouncing.


Subjects
Music, Acoustic Stimulation, Auditory Perception, Humans, Infant, Movement
10.
Biol Cybern ; 115(1): 43-57, 2021 02.
Article in English | MEDLINE | ID: mdl-33399947

ABSTRACT

We study multifrequency Hebbian plasticity by analyzing phenomenological models of weakly connected neural networks. We start with an analysis of a model for single-frequency networks previously shown to learn and memorize phase differences between component oscillators. We then study a model for gradient frequency neural networks (GrFNNs) which extends the single-frequency model by introducing frequency detuning and nonlinear coupling terms for multifrequency interactions. Our analysis focuses on models of two coupled oscillators and examines the dynamics of steady-state behaviors in multiple parameter regimes available to the models. We find that the model for two distinct frequencies shares essential dynamical properties with the single-frequency model and that Hebbian learning results in stronger connections for simple frequency ratios than for complex ratios. We then compare the analysis of the two-frequency model with numerical simulations of the GrFNN model and show that Hebbian plasticity in the latter is locally dominated by a nonlinear resonance captured by the two-frequency model.
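
A much-reduced sketch of the kind of Hebbian rule analyzed here: a complex-valued connection that decays while being driven by a (possibly higher-order) product of two oscillators' states, so it only builds up when their phases keep a fixed relationship. The canonical GrFNN rule includes additional terms (e.g., a |c|^2 saturation) that are omitted, and the parameters are arbitrary.

```python
import numpy as np

def hebbian_connection(f1, f2, k=1, m=1, T=60.0, dt=0.001, lam=-1.0, kappa=0.5):
    """Complex Hebbian connection driven by z1^k * conj(z2)^m.
    It grows only when k*f1 is (nearly) equal to m*f2."""
    t = np.arange(0, T, dt)
    z1 = np.exp(1j * 2 * np.pi * f1 * t)       # unit-amplitude oscillators
    z2 = np.exp(1j * 2 * np.pi * f2 * t)
    c = 0.0 + 0.0j
    for x1, x2 in zip(z1, z2):
        c += dt * (lam * c + kappa * x1 ** k * np.conj(x2) ** m)
    return abs(c)

print(round(hebbian_connection(2.0, 2.0), 3))        # 1:1 -> connection grows
print(round(hebbian_connection(2.0, 2.7), 3))        # incommensurate -> stays small
print(round(hebbian_connection(2.0, 1.0, m=2), 3))   # 2:1 picked up by the k=1, m=2 term
```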


Subjects
Learning, Neural Networks (Computer)
11.
PLoS Comput Biol ; 15(10): e1007371, 2019 10.
Article in English | MEDLINE | ID: mdl-31671096

ABSTRACT

Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When synchronizing with a metronome, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. This anticipation tendency increases with longer stimulus periods of up to 3500 ms, but is less pronounced in trained individuals, such as musicians, than in non-musicians. Furthermore, external factors influence the timing of tapping, including the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously published behavioral data from 1) simple tapping, 2) two-person alternating beat tapping, and 3) two-person alternating rhythm clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals, and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors and has the potential to inform theories of adaptive human synchronization.
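
A bare-bones sketch of the kind of delay-coupled system hypothesized here: a Hopf-type oscillator driven by a sinusoidal "metronome" and by its own delayed activity. The equations, delay, and gains are illustrative stand-ins rather than the published model; the demo only shows that the delayed self-feedback gain shifts the oscillator's steady-state phase relative to the stimulus, which is the knob the study adjusts to capture differences in anticipation (e.g., musicians versus non-musicians).

```python
import numpy as np

def steady_phase(feedback_gain, period=1.0, tau=0.1, alpha=1.0, beta=-1.0,
                 drive=0.5, dt=0.001, T=40.0):
    """Oscillator driven by a sinusoid and by its own delayed activity.
    Returns its mean phase relative to the stimulus over the last cycles."""
    n = int(T / dt)
    d = max(1, int(tau / dt))
    w = 2 * np.pi / period
    z = np.full(n, 0.1 + 0j)                       # first d samples serve as history
    for i in range(d, n - 1):
        stim = drive * np.exp(1j * w * i * dt)
        dz = (z[i] * (alpha + 1j * w + beta * abs(z[i]) ** 2)
              + stim + feedback_gain * z[i - d])
        z[i + 1] = z[i] + dt * dz
    last = np.arange(n - int(5 * period / dt), n)
    rel = np.unwrap(np.angle(z[last] * np.exp(-1j * w * last * dt)))
    return np.mean(rel)

# The delayed self-feedback gain shifts the steady-state phase (in radians)
# of the oscillator relative to the stimulus.
for gain in (0.0, 0.5, 1.0):
    print(gain, round(steady_phase(gain), 3))
```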


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Time Perception/physiology, Activity Cycles, Anticipation (Psychological)/physiology, Behavioral Sciences, Computer Simulation, Feedback, Sensory Feedback/physiology, Humans, Music, Periodicity, Psychomotor Performance
12.
Hear Res ; 380: 100-107, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31234108

ABSTRACT

Nonlinear responses to acoustic signals arise through active processes in the cochlea, which has an exquisite sensitivity and wide dynamic range that can be explained by critical nonlinear oscillations of outer hair cells. Here we ask how the interaction of critical nonlinearities with the basilar membrane and other organ of Corti components could determine tuning properties of the mammalian cochlea. We propose a canonical oscillator model that captures the dynamics of the interaction between the basilar membrane and organ of Corti, using a pair of coupled oscillators for each place along the cochlea. We analyze two models in which a linear oscillator, representing basilar membrane dynamics, is coupled to a nonlinear oscillator poised at a Hopf instability. The coupling in the first model is unidirectional, and that of the second is bidirectional. Parameters are determined by fitting 496 auditory-nerve (AN) tuning curves of macaque monkeys. We find that the unidirectionally and bidirectionally coupled models account equally well for threshold tuning. In addition, however, the bidirectionally coupled model exhibits low-amplitude, spontaneous oscillation in the absence of stimulation, predicting that phase locking will occur before a significant increase in firing frequency, in accordance with well known empirical observations. This leads us to a canonical oscillator cochlear model based on the fundamental principles of critical nonlinear oscillation and coupling dynamics. The model is more biologically realistic than widely used linear or nonlinear filter-based models, yet parsimoniously displays key features of nonlinear mechanistic models. It is efficient enough for computational studies of auditory perception and auditory physiology.
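
A toy version of the coupled pair described here, written in the rotating frame at one characteristic frequency so the stimulus is just a constant complex amplitude. All parameters are made-up dimensionless values. With coupling in one direction only, the Hopf element shows the classic compressive (roughly cube-root) growth; adding the feedback path makes the pair oscillate spontaneously with no input, the qualitative difference the abstract highlights.

```python
import numpy as np

def corti_pair(stim, bidirectional, alpha_bm=-1.0, c=1.0, dt=0.02, T=1000.0):
    """Damped linear oscillator (basilar membrane) coupled to a Hopf oscillator
    at criticality (organ of Corti); rotating-frame toy model."""
    bm, hopf = 0j, 0.001 + 0j
    for _ in range(int(T / dt)):
        d_bm = alpha_bm * bm + stim + (c * hopf if bidirectional else 0.0)
        d_hopf = -abs(hopf) ** 2 * hopf + c * bm
        bm, hopf = bm + dt * d_bm, hopf + dt * d_hopf
    return abs(hopf)

# One-way coupling: compressive growth (~1 decade of output for 3 decades of input)
for amp in (1e-4, 1e-3, 1e-2, 1e-1):
    print("uni ", amp, round(corti_pair(amp, False), 3))
# Two-way coupling: spontaneous oscillation even without any stimulus
print("bidi", 0.0, round(corti_pair(0.0, True), 3))
```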


Subjects
Auditory Perception, Cochlea/innervation, Outer Auditory Hair Cells/physiology, Hearing, Neurological Models, Acoustic Stimulation, Animals, Auditory Pathways/physiology, Computer Simulation, Macaca, Nonlinear Dynamics, Oscillometry, Time Factors
13.
Phys Rev E ; 99(2-1): 022421, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30934299

ABSTRACT

We study mode locking in a canonical model of gradient frequency neural networks under periodic forcing. The canonical model is a generic mathematical model for a network of nonlinear oscillators tuned to a range of distinct frequencies. It is mathematically more tractable than biological neuron models and allows close analysis of mode-locking behaviors. Here we analyze individual modes of synchronization for a periodically forced canonical model and present a complete set of driven behaviors for all parameter regimes available in the model. Using a closed-form approximation, we show that the Arnold tongue (i.e., locking region) for k:m synchronization gets narrower as k and m increase. We find that numerical simulations of the canonical model closely follow the analysis of individual modes when forcing is weak, but they deviate at high forcing amplitudes for which oscillator dynamics are simultaneously influenced by multiple modes of synchronization.
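
The canonical GrFNN equations themselves are not reproduced here, but the Arnold-tongue idea can be illustrated with the simplest possible forced phase oscillator: sweep the oscillator's natural frequency at two forcing amplitudes and record which detunings stay 1:1 locked (rotation number exactly 1). The locked band widens with forcing amplitude, which is the tongue.

```python
import numpy as np

def rotation_number(f_osc, amp, f_force=1.0, cycles=200, steps=100):
    """Oscillator cycles per forcing cycle for
    d(phi)/dt = 2*pi*f_osc + amp*sin(2*pi*f_force*t - phi)."""
    dt = 1.0 / (f_force * steps)
    phi = np.zeros(cycles * steps + 1)
    for i in range(cycles * steps):
        phi[i + 1] = phi[i] + dt * (2 * np.pi * f_osc
                                    + amp * np.sin(2 * np.pi * f_force * i * dt - phi[i]))
    half = len(phi) // 2                      # discard the transient half
    return (phi[-1] - phi[half]) / (2 * np.pi * f_force * dt * (len(phi) - 1 - half))

# Detunings that remain 1:1 mode locked, at weak and at strong forcing:
for amp in (0.5, 2.0):
    locked = [round(f, 2) for f in np.arange(0.80, 1.21, 0.05)
              if abs(rotation_number(f, amp) - 1.0) < 1e-3]
    print(amp, locked)                        # the locked band (tongue) widens with amp
```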

14.
Ann N Y Acad Sci ; 1453(1): 125-139, 2019 10.
Article in English | MEDLINE | ID: mdl-31021447

ABSTRACT

Previous research suggests that infants' perception of musical rhythm is fine-tuned to culture-specific rhythmic structures over the first postnatal year of human life. To date, however, little is known about the neurobiological principles that may underlie this process. In the current study, we used a dynamical systems model featuring neural oscillation and Hebbian plasticity to simulate infants' perceptual learning of culture-specific musical rhythms. First, we demonstrate that oscillatory activity in an untrained network reflects the rhythmic structure of either a Western or a Balkan training rhythm in a veridical fashion. Next, during a period of unsupervised learning, we show that the network learns the rhythmic structure of either a Western or a Balkan training rhythm through the self-organization of network connections. Finally, we demonstrate that the learned connections affect the networks' response to violations to the metrical structure of native and nonnative rhythms, a pattern of findings that mirrors the behavioral data on infants' perceptual narrowing to musical rhythms.


Subjects
Auditory Perception/physiology, Brain/physiology, Neurological Models, Music, Neuronal Plasticity/physiology, Periodicity, Child Development/physiology, Humans, Infant, Learning/physiology
15.
Neuroimage ; 185: 96-101, 2019 01 15.
Article in English | MEDLINE | ID: mdl-30336253

ABSTRACT

Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of matched rhythmic structure is comparable across the two domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalogram (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
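
One simple way to compute a CACoh-like measure: take the stimulus amplitude envelope, resample it to the EEG rate, and compute magnitude-squared coherence with an EEG channel. The paper's actual pipeline (filtering, epoching, statistics) is not reproduced, and the toy signals below are made up.

```python
import numpy as np
from scipy.signal import coherence, hilbert

def cerebral_acoustic_coherence(eeg, audio, fs_eeg, fs_audio, fmax=10.0):
    """Coherence between an EEG channel and the stimulus amplitude envelope."""
    env = np.abs(hilbert(audio))                        # amplitude envelope
    t_eeg = np.arange(len(eeg)) / fs_eeg
    t_audio = np.arange(len(audio)) / fs_audio
    env_ds = np.interp(t_eeg, t_audio, env)             # envelope at the EEG rate
    f, coh = coherence(eeg, env_ds, fs=fs_eeg, nperseg=int(4 * fs_eeg))
    keep = f <= fmax
    return f[keep], coh[keep]

# Toy data: an "EEG" signal that partially follows a 2 Hz envelope modulation
fs_eeg, fs_audio, dur = 250, 8000, 60
t_a = np.arange(0, dur, 1 / fs_audio)
audio = (1 + 0.8 * np.sin(2 * np.pi * 2.0 * t_a)) * np.random.randn(len(t_a))
t_e = np.arange(0, dur, 1 / fs_eeg)
eeg = 0.5 * np.sin(2 * np.pi * 2.0 * t_e) + np.random.randn(len(t_e))
f, coh = cerebral_acoustic_coherence(eeg, audio, fs_eeg, fs_audio)
print(round(f[np.argmax(coh)], 2), round(coh.max(), 2))   # peak near the 2 Hz rate
```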


Subjects
Cerebral Cortex/physiology, Music, Pitch Perception/physiology, Speech Perception/physiology, Adult, Brain Mapping/methods, Electroencephalography/methods, Female, Humans, Male, Young Adult
16.
Sci Rep ; 8(1): 8662, 2018 May 31.
Article in English | MEDLINE | ID: mdl-29849068

ABSTRACT

A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.

17.
Sci Rep ; 8(1): 6229, 2018 04 18.
Article in English | MEDLINE | ID: mdl-29670143

ABSTRACT

Prior expectations can bias evaluative judgments of sensory information. We show that information about a performer's status can bias the evaluation of musical stimuli, reflected by differential activity of the ventromedial prefrontal cortex (vmPFC). Moreover, we demonstrate that decreased susceptibility to this confirmation bias is (a) accompanied by recruitment of the executive control network and (b) correlated with the white-matter structure of that network, particularly in relation to the dorsolateral prefrontal cortex (dlPFC). By using long-duration musical stimuli, we were able to track the initial biasing, subsequent perception, and ultimate evaluation of the stimuli, examining the full evolution of these biases over time. Our findings confirm the persistence of confirmation bias effects even when ample opportunity exists to gather information about true stimulus quality, and they underline the importance of executive control in reducing bias.

18.
Phys Rev E ; 95(6-1): 062414, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28709287

ABSTRACT

Many neurons in the auditory system of the brain must encode periodic signals. Under periodic stimulation, these neurons display rich dynamical states, including mode locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude-modulated sounds can lead to various forms of n:m mode-locked states, in which a neuron fires n action potentials per m cycles of the stimulus. Here, we study mode locking in the Izhikevich neuron, a reduced model of the Hodgkin-Huxley neuron. The Izhikevich model is of much lower dimension than other existing models of coupled nonlinear differential equations, yet it is excellent at generating the complex spiking patterns observed in real neurons. We obtained the regions of existence of the various mode-locked states on the frequency-amplitude plane, called Arnold tongues, for the Izhikevich neuron. Arnold tongue analysis provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. We find these tongues for both class-1 and class-2 excitable neurons in both deterministic and noisy regimes.
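
A minimal sketch of the kind of simulation behind an Arnold-tongue analysis: the standard two-variable Izhikevich model driven by a sinusoidal current, with the spikes-per-stimulus-cycle ratio read out. Regular-spiking parameters and the drive values are illustrative rather than the study's settings; mapping a full tongue would repeat this over a grid of forcing frequencies and amplitudes.

```python
import numpy as np

def spikes_per_cycle(freq_hz, drive_amp, drive_dc=4.0, dur_ms=2000.0, dt=0.1,
                     a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters) under sinusoidal forcing."""
    v, u, spikes = -65.0, b * -65.0, 0
    for i in range(int(dur_ms / dt)):
        t = i * dt / 1000.0                                  # seconds
        I = drive_dc + drive_amp * np.sin(2 * np.pi * freq_hz * t)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                                        # spike: reset
            v, u = c, u + d
            spikes += 1
    return spikes / (freq_hz * dur_ms / 1000.0)

# A ratio that sits at a simple fraction (2.0, 1.0, 0.5, ...) and stays there
# over a range of drives indicates an n:m mode-locked state (an Arnold tongue).
for f in (5.0, 10.0, 20.0, 40.0):
    print(f, round(spikes_per_cycle(f, drive_amp=6.0), 2))
```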


Subjects
Neurological Models, Neurons/physiology, Action Potentials/physiology, Animals, Auditory Perception/physiology, Computer Simulation, Nonlinear Dynamics, Periodicity
19.
J Neurosci ; 37(26): 6331-6341, 2017 06 28.
Article in English | MEDLINE | ID: mdl-28559379

ABSTRACT

Most humans have a near-automatic inclination to tap, clap, or move to the beat of music. The capacity to extract a periodic beat from a complex musical segment is remarkable, as it requires abstraction from the temporal structure of the stimulus. It has been suggested that nonlinear interactions in neural networks result in cortical oscillations at the beat frequency, and that such entrained oscillations give rise to the percept of a beat or a pulse. Here we tested this neural resonance theory using MEG recordings as female and male individuals listened to 30 s sequences of complex syncopated drumbeats designed so that they contain no net energy at the pulse frequency when measured using linear analysis. We analyzed the spectrum of the neural activity while listening and compared it to the modulation spectrum of the stimuli. We found an enhanced neural response in the auditory cortex at the pulse frequency. We also showed phase locking at the times of the missing pulse, even though the pulse was absent from the stimulus itself. Moreover, the strength of this pulse response correlated with individuals' speed in finding the pulse of these stimuli, as tested in a follow-up session. These findings demonstrate that neural activity at the pulse frequency in the auditory cortex is internally generated rather than stimulus-driven. The current results are consistent both with neural resonance theory and with models based on the nonlinear response of the brain to rhythmic stimuli. The results thus help narrow the search for valid models of beat perception. SIGNIFICANCE STATEMENT: Humans perceive music as having a regular pulse marking equally spaced points in time, within which musical notes are temporally organized. Neural resonance theory (NRT) provides a theoretical model explaining how an internal periodic representation of a pulse may emerge through nonlinear coupling between oscillating neural systems. After testing key falsifiable predictions of NRT using MEG recordings, we demonstrate the emergence of neural oscillations at the pulse frequency, which can be related to pulse perception. These findings rule out alternative explanations for neural entrainment and provide evidence linking neural synchronization to the perception of pulse, a widely debated topic in recent years.
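
The linear check described here can be illustrated in a few lines: build a click pattern on a sixteenth-note grid, take its Fourier (modulation) spectrum, and read off the energy at the pulse rate. The pattern below is invented for the illustration (it is not one of the study's drumbeats); its onsets are chosen so that the 2 Hz pulse component cancels exactly.

```python
import numpy as np

fs = 200                                  # envelope sampling rate (Hz)
grid = 0.125                              # sixteenth-note grid (s); pulse = 2 Hz
pattern = [0, 2, 3, 5, 8, 10, 11, 13]     # onsets in grid steps, per 2 s measure
n_measures = 15

env = np.zeros(int(n_measures * 2.0 * fs))
for m in range(n_measures):
    for k in pattern:
        env[int(round((m * 2.0 + k * grid) * fs))] = 1.0   # unit click per onset

spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
for f_query in (1.0, 2.0, 8.0):
    i = np.argmin(np.abs(freqs - f_query))
    print(f_query, round(spec[i], 1))
# Essentially zero energy at the 2 Hz pulse even though the pattern is built on
# that pulse; any pulse-rate response in the brain must then be internally
# generated, which is the logic of the MEG analysis.
```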


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Biological Clocks/physiology, Cortical Synchronization/physiology, Auditory Evoked Potentials/physiology, Periodicity, Acoustic Stimulation/methods, Action Potentials/physiology, Adult, Cues (Psychology), Physiological Feedback, Female, Humans, Male, Neurological Models, Music
20.
Front Neurosci ; 10: 257, 2016.
Article in English | MEDLINE | ID: mdl-27375418

ABSTRACT

Human capacity for entraining movement to external rhythms-i.e., beat keeping-is ubiquitous, but its evolutionary history and neural underpinnings remain a mystery. Recent findings of entrainment to simple and complex rhythms in non-human animals pave the way for a novel comparative approach to assess the origins and mechanisms of rhythmic behavior. The most reliable non-human beat keeper to date is a California sea lion, Ronan, who was trained to match head movements to isochronous repeating stimuli and showed spontaneous generalization of this ability to novel tempos and to the complex rhythms of music. Does Ronan's performance rely on the same neural mechanisms as human rhythmic behavior? In the current study, we presented Ronan with simple rhythmic stimuli at novel tempos. On some trials, we introduced "perturbations," altering either tempo or phase in the middle of a presentation. Ronan quickly adjusted her behavior following all perturbations, recovering her consistent phase and tempo relationships to the stimulus within a few beats. Ronan's performance was consistent with predictions of mathematical models describing coupled oscillation: a model relying solely on phase coupling strongly matched her behavior, and the model was further improved with the addition of period coupling. These findings are the clearest evidence yet for parity in human and non-human beat keeping and support the view that the human ability to perceive and move in time to rhythm may be rooted in broadly conserved neural mechanisms.
